Minimum Noticeable Difference-Based Adversarial Privacy Preserving Image Generation

Authors

Abstract

Deep learning models are found to be vulnerable to adversarial examples, as wrong predictions can be caused by small perturbations in the input of deep models. Most of the existing works on adversarial image generation try to achieve attacks on most models, while few of them make efforts on guaranteeing the perceptual quality of the adversarial examples. High-quality adversarial examples matter in many applications, especially privacy preserving. In this work, we develop a framework based on the Minimum Noticeable Difference (MND) concept to generate privacy-preserving images that have a minimum perceptual difference from the clean ones but are still able to attack deep models. To achieve this, an adversarial loss is firstly proposed so that the deep models can be attacked successfully. Then, a quality-preserving loss is developed by taking the perturbation magnitude and the perturbation-caused structural and gradient changes into account, which aims to preserve high perceptual quality during generation. To the best of our knowledge, this is the first work exploring the MND concept for privacy-preserving image generation. To evaluate its performance in terms of perceptual quality, classification and face recognition tasks are tested with the proposed method and several anchor methods in this work. Extensive experimental results demonstrate that the proposed method is capable of generating privacy-preserving images with remarkably improved quality metrics (e.g., PSNR, SSIM, MOS) compared with those generated by the anchor methods.
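The two losses described above can be sketched in a minimal NumPy illustration. This is not the paper's implementation: the margin-style adversarial surrogate, the loss weights, and the finite-difference gradient term are all assumptions standing in for the actual formulation.

```python
import numpy as np

def adversarial_loss(logits, true_label):
    # Margin-style surrogate encouraging misclassification: push the
    # true-class logit below the best competing logit (assumed form).
    other = np.max(np.delete(logits, true_label))
    return max(logits[true_label] - other, 0.0)

def gradient_map(img):
    # Simple finite-difference image gradients (horizontal and vertical).
    return np.diff(img, axis=1), np.diff(img, axis=0)

def quality_preserving_loss(clean, adv, w_mag=1.0, w_grad=1.0):
    # Perturbation-magnitude term plus a perturbation-caused
    # gradient-change term; weights are illustrative assumptions.
    mag = np.mean((adv - clean) ** 2)
    gx_c, gy_c = gradient_map(clean)
    gx_a, gy_a = gradient_map(adv)
    grad = np.mean((gx_a - gx_c) ** 2) + np.mean((gy_a - gy_c) ** 2)
    return w_mag * mag + w_grad * grad

def total_loss(logits, true_label, clean, adv, lam=0.5):
    # Combined objective: attack success plus perceptual-quality preservation.
    return adversarial_loss(logits, true_label) + lam * quality_preserving_loss(clean, adv)
```

An unmodified image incurs zero quality penalty, and a successfully misclassified example incurs zero adversarial penalty, so the optimizer is pushed toward the smallest perturbation that still flips the prediction.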


Similar Articles

Privacy-Preserving Adversarial Networks

We propose a data-driven framework for optimizing privacy-preserving data release mechanisms toward the information-theoretically optimal tradeoff between minimizing distortion of useful data and concealing sensitive information. Our approach employs adversarially-trained neural networks to implement randomized mechanisms and to perform a variational approximation of mutual information privacy....
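The distortion/leakage tradeoff such a mechanism targets can be illustrated with a toy sketch. Gaussian noise stands in for the adversarially trained randomized mechanism and a fixed threshold rule for the adversary; both are assumptions for illustration, not the paper's networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def mechanism(x, noise_scale):
    # Randomized release: additive Gaussian noise as a stand-in for a
    # learned privatization network.
    return x + rng.normal(0.0, noise_scale, size=x.shape)

def distortion(x, y):
    # Utility cost of the release: mean squared distortion.
    return np.mean((x - y) ** 2)

def adversary_accuracy(y, s, threshold=0.5):
    # Threshold adversary guessing the sensitive bit s from the release y.
    return np.mean((y > threshold).astype(int) == s)
```

Increasing `noise_scale` raises distortion while driving the adversary toward chance accuracy; the adversarial training in the paper searches this tradeoff automatically instead of sweeping a noise knob.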


Learning Privacy Preserving Encodings through Adversarial Training

We present a framework to learn privacy-preserving encodings of images (or other high-dimensional data) to inhibit inference of a chosen private attribute. Rather than encoding a fixed dataset or inhibiting a fixed estimator, we aim to learn an encoding function such that even after this function is fixed, an estimator with knowledge of the encoding is unable to learn to accurately predict the...
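The idea of an encoding that defeats even a fully informed estimator can be illustrated with a linear stand-in: fit the attribute-predictive direction, then project it out of the data. The least-squares adversary and the projection encoder below are assumptions for illustration, not the paper's learned networks.

```python
import numpy as np

def fit_attribute_direction(X, a):
    # Least-squares direction predicting the private attribute a from X
    # (a linear stand-in for the adversarial estimator).
    w, *_ = np.linalg.lstsq(X, a, rcond=None)
    return w

def encode(X, w):
    # Remove the attribute-predictive component, so that even a fresh
    # linear estimator with full knowledge of the encoding recovers
    # nothing along that direction.
    u = w / np.linalg.norm(w)
    return X - np.outer(X @ u, u)
```

After encoding, any linear predictor along `w` outputs a constant, which mirrors (in a linear toy) the paper's goal that re-trained estimators fail on the encoded data.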


CNN Based Adversarial Embedding with Minimum Alteration for Image Steganography

Historically, steganographic schemes were designed in a way to preserve image statistics or steganalytic features. Since most of the state-of-the-art steganalytic methods employ a machine learning (ML) based classifier, it is reasonable to consider countering steganalysis by trying to fool the ML classifiers. However, simply applying perturbations on stego images as adversarial examples may lea...
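One way to couple adversarial gradients with steganographic embedding is to bias the per-pixel embedding costs by the sign of the steganalyzer's gradient. The rule below is a hypothetical sketch of that idea; the scaling factor and sign convention are assumptions, not the paper's algorithm.

```python
import numpy as np

def adjust_costs(costs_plus, costs_minus, grad, alpha=2.0):
    # Make +1 modifications cheaper where the steganalyzer's gradient is
    # negative (a +1 change there nudges the image toward the "cover"
    # decision), and symmetrically for -1 modifications.
    cp = np.where(grad < 0, costs_plus / alpha, costs_plus * alpha)
    cm = np.where(grad > 0, costs_minus / alpha, costs_minus * alpha)
    return cp, cm
```

A cost-based embedder (e.g. a syndrome-trellis coder) then concentrates changes in the cheapened positions, steering the stego image away from the classifier's decision boundary instead of perturbing it post hoc.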


Preserving Differential Privacy in Degree-Correlation based Graph Generation

Enabling accurate analysis of social network data while preserving differential privacy has been challenging since graph features such as cluster coefficient often have high sensitivity, which is different from traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we study the problem of enforcing edge differential privacy in graph generation. The idea is to enfo...
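A standard building block for edge differential privacy on graph statistics is the Laplace mechanism; the sketch below privatizes a degree sequence, whose L1 sensitivity under edge differential privacy is 2 (adding or removing one edge changes two degrees by 1 each). This is a generic illustration, not the paper's specific mechanism.

```python
import numpy as np

rng = np.random.default_rng(42)

def laplace_mechanism(value, sensitivity, epsilon):
    # Standard Laplace mechanism for epsilon-differential privacy.
    return value + rng.laplace(0.0, sensitivity / epsilon)

def private_degree_sequence(degrees, epsilon):
    # Sensitivity 2 under edge DP; clamp and round back to valid degrees.
    noisy = [laplace_mechanism(d, 2.0, epsilon) for d in degrees]
    return [max(0, int(round(x))) for x in noisy]
```

Post-processing (rounding, clamping) does not weaken the privacy guarantee, which is why the noisy degrees can be repaired into a valid sequence before feeding a degree-correlation-based generator.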


Cloud-based Privacy Preserving Image Storage, Sharing and Search

High-resolution cameras produce huge volumes of high-quality images every day. It is extremely challenging to store, share, and especially search such quantities of images, so a growing number of cloud services have emerged to support these functionalities. However, images tend to contain rich sensitive information (e.g., people, location, and event), and people’s privacy concerns hinder their read...



Journal

Journal title: IEEE Transactions on Circuits and Systems for Video Technology

Year: 2023

ISSN: 1051-8215, 1558-2205

DOI: https://doi.org/10.1109/tcsvt.2022.3210010